Zero-shot relation triplet extraction (ZeroRTE) aims to extract relation triplets from unstructured text under the zero-shot setting, where the relation sets at the training and testing stages are disjoint. The previous state-of-the-art method handles this challenging task by leveraging pretrained language models to generate data as additional training samples, which increases the training cost and severely constrains the model's performance. To address these issues, we propose a novel method named PCRED for ZeroRTE with Potential Candidate Relation Selection and Entity Boundary Detection. A remarkable characteristic of PCRED is that it does not rely on additional data yet still achieves promising performance. The model adopts a relation-first paradigm, recognizing unseen relations through candidate relation selection. With this approach, the semantics of relations are naturally infused into the context. Entities are subsequently extracted based on the context and the semantics of the relations. We evaluate our model on two ZeroRTE datasets. The experimental results show that our method consistently outperforms previous works. Our code will be available at https://anonymous.4open.science/r/PCRED.
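The relation-first paradigm described above can be made concrete with a toy sketch: rank a set of candidate relations against the sentence, then locate entity boundaries conditioned on the chosen relation. The keyword-overlap scorer and all function names below are illustrative assumptions, not the actual PCRED model.

```python
# Hypothetical sketch of a relation-first extraction pipeline.
# The scoring here is toy keyword overlap, not a learned model.

def select_relation(sentence, candidate_relations):
    """Pick the candidate relation whose name words best match the context."""
    tokens = set(sentence.lower().split())
    def score(relation):
        # overlap between relation-name words and sentence tokens
        return len(set(relation.lower().split("_")) & tokens)
    return max(candidate_relations, key=score)

def extract_triplet(sentence, candidate_relations, entities):
    relation = select_relation(sentence, candidate_relations)
    # With the relation fixed, entity extraction can condition on its
    # semantics; here we simply take known entity mentions in order.
    spans = sorted((sentence.find(e), e) for e in entities if e in sentence)
    head, tail = spans[0][1], spans[1][1]
    return (head, relation, tail)

triplet = extract_triplet(
    "Alan Turing was born in London.",
    ["born_in", "works_for"],
    ["Alan Turing", "London"],
)
print(triplet)  # ('Alan Turing', 'born_in', 'London')
```

A trained ZeroRTE model would replace both the relation scorer and the span selection with learned components; the point of the sketch is only the ordering: relation selection first, entity boundaries second.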
Learning robust feature matching between the template and the search region is crucial for 3D Siamese tracking. The core of Siamese feature matching is how to assign high feature similarity to the corresponding points between the template and the search region for precise object localization. In this paper, we propose a novel point cloud registration-driven Siamese tracking framework, with the intuition that spatially aligned corresponding points (via 3D registration) tend to achieve consistent feature representations. Specifically, our method consists of two modules: a tracking-specific non-local registration module and a registration-aided Sinkhorn template-feature aggregation module. The registration module targets precise spatial alignment between the template and the search region. A tracking-specific spatial distance constraint is proposed to refine the cross-attention weights in the non-local module for discriminative feature learning. We then use weighted SVD to compute the rigid transformation between the template and the search region, and align them to achieve the desired spatially aligned corresponding points. For the feature aggregation module, we formulate feature matching between the transformed template and the search region as an optimal transport problem and utilize Sinkhorn optimization to search for an outlier-robust matching solution. In addition, a registration-aided spatial distance map is constructed to improve matching robustness in indistinguishable regions (e.g., smooth surfaces). Finally, guided by the obtained feature matching map, we aggregate target information from the template into the search region to construct target-specific features, which are then fed into a CenterPoint-like detection head for object localization. Extensive experiments on the KITTI, nuScenes, and Waymo datasets verify the effectiveness of our proposed method.
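The weighted-SVD step mentioned above is the standard weighted Kabsch/Procrustes solution: given soft correspondences (weights) between template points and search-region points, recover the rigid transform that best aligns them. The sketch below shows only that building block under illustrative data, not the paper's full registration module.

```python
import numpy as np

def weighted_rigid_transform(X, Y, w):
    """X, Y: (N, 3) corresponding points; w: (N,) non-negative weights.
    Returns (R, t) such that Y ~= X @ R.T + t."""
    w = w / w.sum()
    mu_x = (w[:, None] * X).sum(axis=0)
    mu_y = (w[:, None] * Y).sum(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y
    H = (w[:, None] * Xc).T @ Yc              # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_y - R @ mu_x
    return R, t

# Sanity check: recover a known rotation and translation.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Y = X @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = weighted_rigid_transform(X, Y, np.ones(50))
print(np.allclose(R, R_true))  # True
```

In the tracker, the weights would come from the learned cross-attention map rather than being uniform as in this sanity check.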
Siamese network-based trackers formulate 3D single object tracking as cross-correlation learning between the point features of a template and a search region. Because of the large appearance variation between the template and the search region during tracking, how to learn robust cross-correlation between them to identify the potential target in the search region remains a challenging problem. In this paper, we explicitly use Transformers to form a 3D Siamese Transformer network that learns robust cross-correlation between the template and the search region of point clouds. Specifically, we develop a Siamese point Transformer network to learn the shape context information of the target. Its encoder uses self-attention to capture non-local information of the point clouds and characterize the shape information of the object, while its decoder utilizes cross-attention to extract discriminative point features. After that, we develop an iterative coarse-to-fine correlation network to learn robust cross-correlation between the template and the search region. It formulates cross-feature augmentation to associate the template with the potential target in the search region via cross-attention. To further enhance the potential target, it employs ego-feature augmentation, which applies self-attention to the local k-NN graph in feature space to aggregate target features. Experiments on the KITTI, nuScenes, and Waymo datasets show that our method achieves state-of-the-art performance on the 3D single object tracking task.
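The cross-attention primitive this tracker builds on can be sketched in a few lines: search-region features act as queries that attend into template features. Shapes and variable names below are illustrative, not the paper's exact architecture.

```python
import numpy as np

def cross_attention(Q, K, V):
    """Q: (n_q, d) queries (e.g. search-region point features);
    K, V: (n_k, d) keys/values (e.g. template point features)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over template points
    return attn @ V                                # aggregate template features

rng = np.random.default_rng(1)
search = rng.normal(size=(128, 32))    # search-region point features
template = rng.normal(size=(64, 32))   # template point features
out = cross_attention(search, template, template)
print(out.shape)  # (128, 32)
```

Self-attention (as in the ego-feature augmentation described above) is the special case where queries, keys, and values all come from the same feature set.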
Most pregnancies and births result in good outcomes, but complications are not uncommon, and when they occur they can be associated with serious consequences for both mother and baby. Predictive modeling has the potential to improve outcomes, and thereby help obstetricians deliver better care, through a better understanding of risk factors, enhanced surveillance, and more timely and appropriate interventions. For three types of complications, including severe maternal morbidity (SMM) and complications associated with preterm birth, we use the Explainable Boosting Machine (EBM), a glass-box model, to identify and study the most important risk factors. While using the interpretability of the EBM to reveal surprising insights into the features contributing to risk, our experiments show that EBMs match the accuracy of other black-box ML methods such as deep neural networks and random forests.
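What makes an EBM a "glass-box" model is its additive structure: the risk score is a sum of per-feature shape functions, so each feature's contribution can be read off directly. The toy sketch below illustrates only that structure; the feature names and shape functions are made up for illustration, and a real EBM learns its shape functions by cyclic gradient boosting (see the interpret library), not by hand.

```python
# Toy illustration of the additive structure behind a glass-box EBM.
# All thresholds and coefficients here are hypothetical.

def smm_risk_score(features):
    """features: dict of clinical attributes -> values (illustrative names)."""
    contributions = {
        "maternal_age": 0.04 * max(0, features["maternal_age"] - 35),
        "bmi": 0.02 * max(0, features["bmi"] - 30),
        "prior_cesarean": 0.3 if features["prior_cesarean"] else 0.0,
    }
    # The prediction is just the sum of per-feature terms, so each
    # term is itself the explanation of that feature's effect.
    return sum(contributions.values()), contributions

score, parts = smm_risk_score(
    {"maternal_age": 40, "bmi": 28, "prior_cesarean": True}
)
print(round(score, 2))  # 0.5
```

This additivity is what lets an EBM expose per-feature risk curves to clinicians while, as the abstract reports, remaining competitive in accuracy with black-box models.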
Self-attention has shown a prominent ability to capture long-range relationships and to improve performance on vision tasks such as image classification and image captioning. However, the self-attention module highly depends on the dot-product multiplication and dimension alignment among the query-key-value features, which causes two problems: (1) the dot-product multiplication results in exhaustive and redundant computation; (2) since visual feature maps usually appear as multi-dimensional tensors, reshaping the tensor features to satisfy the dimension alignment may destroy the internal structure of the tensor feature maps. To address these problems, this paper proposes a self-attention plug-in module with its variants, namely Synthesizing Tensor Transformations (STT), for directly processing image tensor features. Without computing the dot-product multiplication among query-key-value, the basic STT is composed of tensor transformations that learn synthetic attention from visual information. The effectiveness of the STT series is validated on image classification and image captioning. Experiments show that the proposed STT achieves competitive performance while maintaining robustness, compared with self-attention-based methods on vision tasks.
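The reshape step the abstract criticizes can be made concrete: standard self-attention flattens an (H, W, C) feature map into (H*W, C) tokens, collapsing the 2-D spatial layout into a single axis. The sketch below illustrates only that step, not the proposed STT module.

```python
import numpy as np

# Standard self-attention on images first flattens the spatial axes:
# (H, W, C) -> (H*W, C). The 2-D neighborhood structure survives only
# implicitly, via the external bookkeeping of H and W.
H, W, C = 8, 8, 16
feat = np.arange(H * W * C, dtype=np.float32).reshape(H, W, C)

tokens = feat.reshape(H * W, C)       # spatial structure collapsed into one axis
restored = tokens.reshape(H, W, C)    # recoverable only if H and W are kept

print(tokens.shape)  # (64, 16)
```

STT's stated goal is to operate on the tensor features directly, avoiding both this flattening and the query-key dot product.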
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (blurring, blocking, bleeding, and ringing) and two temporal PEAs (flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and high consistency with human visual perception. For temporal artifacts, the self-attention-based TimeSformer is improved to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe SSTAM will be beneficial for optimizing video coding techniques.
In this paper we explore the task of modeling (semi-)structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. We assume that each structured object is represented by a set of key-value pairs which encode its attributes. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments on real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies that motivate our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
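The key-conditioned value sequences that TVM operates on amount to a pivot of the data: a sequence of structured objects (dicts of key-value pairs) becomes one value sequence per key in the universe. A minimal sketch, with illustrative field names:

```python
# Pivot a sequence of structured objects into per-key value sequences,
# the input view assumed by Temporal Value Modeling (TVM).

def key_conditioned_sequences(objects, universe):
    """objects: list of dicts (one per time step); universe: all keys.
    Keys absent from an object yield None at that time step."""
    return {k: [obj.get(k) for obj in objects] for k in universe}

events = [
    {"action": "view", "item": "A", "price": 10},
    {"action": "click", "item": "A"},
    {"action": "buy", "item": "B", "price": 12},
]
seqs = key_conditioned_sequences(events, ["action", "item", "price"])
print(seqs["price"])  # [10, None, 12]
```

Each such per-key sequence is encoded by TVM, and KA then self-attends across the resulting key-conditioned representations, in contrast to a flattened view that interleaves all keys and values into one long token stream.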
In contrast to control-theoretic methods, the lack of a stability guarantee remains a significant problem for model-free reinforcement learning (RL) methods. Jointly learning a policy and a Lyapunov function has recently become a promising approach to equipping the whole system with a stability guarantee. However, the classical Lyapunov constraints introduced by previous researchers cannot stabilize the system during sampling-based optimization. Therefore, we propose Adaptive Stability Certification (ASC), which makes the system reach sampling-based stability. Because the ASC condition can search for the optimal policy heuristically, we design the Adaptive Lyapunov-based Actor-Critic (ALAC) algorithm based on the ASC condition. Meanwhile, our algorithm avoids the optimization problem in current approaches, where a variety of constraints are coupled into the objective. When evaluated on ten robotic tasks, our method achieves lower accumulated cost and fewer stability constraint violations than previous studies.
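A sampling-based Lyapunov condition of the general kind discussed above can be sketched as follows: over sampled transitions (s, s'), a candidate Lyapunov function V should satisfy a decrease condition such as V(s') - V(s) <= -alpha * V(s) in sample average. The quadratic V, the transitions, and the exact inequality below are illustrative assumptions, not the paper's ASC condition.

```python
# Hedged sketch of a sample-average Lyapunov decrease check.

def lyapunov_decrease_holds(V, transitions, alpha=0.1):
    """transitions: list of (state, next_state) pairs sampled under a policy.
    Checks the averaged decrease condition V(s') - V(s) <= -alpha * V(s)."""
    gaps = [V(s2) - V(s1) + alpha * V(s1) for s1, s2 in transitions]
    return sum(gaps) / len(gaps) <= 0.0

V = lambda s: sum(x * x for x in s)   # candidate quadratic Lyapunov function

# A contracting system: each step scales the state by 0.9,
# so V shrinks by factor 0.81 per step and the condition holds.
transitions = [((1.0, 2.0), (0.9, 1.8)), ((0.5, -1.0), (0.45, -0.9))]
print(lyapunov_decrease_holds(V, transitions))  # True
```

In a joint policy/Lyapunov learning scheme, a check of this form would enter the training objective as a constraint on sampled data rather than be evaluated once post hoc.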
Deploying reliable deep learning techniques in interdisciplinary applications requires learned models to output accurate and ({even more importantly}) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations come from accurate predictions/classifications. We make the opposite claim: explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction could be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network, dubbed NeuroExplainer, with applications to uncovering altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attentions and respective discriminative representations to accurately recognize preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision coupled with targeted regularizers deduced from domain knowledge regarding brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) during network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer yields quantitatively reliable explanation results that are qualitatively consistent with representative neuroimaging studies.
Off-policy evaluation (OPE) is concerned with evaluating a new target policy using offline data generated by a potentially different behavior policy. It is critical in a number of sequential decision-making problems ranging from healthcare to technology industries. Most of the existing literature focuses on evaluating the mean outcome of a given policy and ignores the variability of the outcome. However, in a variety of applications, criteria other than the mean may be more sensible. For example, when the reward distribution is skewed and asymmetric, quantile-based metrics are often preferred for their robustness. In this paper, we propose a doubly robust inference procedure for quantile OPE in sequential decision making and study its asymptotic properties. In particular, we propose utilizing state-of-the-art deep conditional generative learning methods to handle parameter-dependent nuisance function estimation. We demonstrate the advantages of the proposed estimator through both simulations and a real-world dataset from a short-video platform. In particular, we find that our proposed estimator outperforms classical OPE estimators for the mean in settings with heavy-tailed reward distributions.
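The robustness motivation above has a simple illustration: under a heavy-tailed reward distribution, a single extreme outcome drags the mean while a quantile such as the median is unaffected. This toy example only demonstrates that contrast; it is not the paper's doubly robust estimator.

```python
import statistics

# 100 observed rewards with a heavy right tail: one extreme outcome.
rewards = [1.0] * 99 + [1000.0]

mean = statistics.fmean(rewards)      # dominated by the single outlier
median = statistics.median(rewards)   # insensitive to the tail

print(mean, median)  # 10.99 1.0
```

In the OPE setting, the same sensitivity applies to mean-based value estimates, which is why quantile-based criteria can give a more stable picture of a policy's typical performance under heavy-tailed rewards.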